In grant-free sparse code multiple access (GF-SCMA) systems, active user detection (AUD) is a major performance bottleneck because it involves a complex combinatorial problem, which makes the joint design of contention resources for users and receivers crucial yet challenging. To this end, we propose autoencoder (AE)-based joint optimization of both the preamble generation network (PGN) on the encoder side and data-aided AUD on the decoder side. The core architecture of the proposed AE is a novel user activity extraction network (UAEN) in the decoder, which extracts a-priori user activity information from the SCMA codeword data for data-aided AUD. End-to-end training of the proposed AE enables joint optimization of the contention resources, i.e., preamble sequences, each associated with one of the codebooks, together with the extraction of user activity information from both the preamble and the SCMA-based data transmission. Furthermore, we propose a self-supervised pre-training scheme for the UAEN prior to end-to-end training, to ensure the convergence of the UAEN, which lies deep inside the AE network. Simulation results show that the proposed scheme outperforms state-of-the-art DL-based AUD schemes.
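To make the architecture above concrete, here is a minimal PyTorch sketch of the idea, with hypothetical layer sizes and names; the learnable preambles, the UAEN, and the fused AUD head are stand-ins for the paper's networks, not the authors' implementation:

```python
# Minimal sketch (hypothetical shapes/names, not the authors' code): a
# learnable preamble per codebook, a user activity extraction network (UAEN)
# operating on received codeword data, and an AUD head fusing both.
import torch
import torch.nn as nn

class UAEN(nn.Module):
    """Extracts per-user activity logits from received SCMA codeword data."""
    def __init__(self, data_dim, num_users, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(data_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_users),
        )
    def forward(self, y_data):
        return self.net(y_data)

class GFSCMAAutoencoder(nn.Module):
    def __init__(self, num_users, preamble_len, data_dim):
        super().__init__()
        # Encoder side: one learnable preamble sequence per user/codebook.
        self.preambles = nn.Parameter(torch.randn(num_users, preamble_len))
        self.uaen = UAEN(data_dim, num_users)
        # AUD head fuses the received preamble with UAEN activity evidence.
        self.aud = nn.Sequential(
            nn.Linear(preamble_len + num_users, 256), nn.ReLU(),
            nn.Linear(256, num_users),
        )
    def forward(self, activity, y_data, noise_std=0.1):
        # Superimpose preambles of active users and add channel noise.
        y_pre = activity @ self.preambles
        y_pre = y_pre + noise_std * torch.randn_like(y_pre)
        act_info = self.uaen(y_data)          # a-priori activity from data
        return self.aud(torch.cat([y_pre, act_info], dim=-1))

# Toy usage: 6 users with binary activity; multi-label BCE drives the AUD.
model = GFSCMAAutoencoder(num_users=6, preamble_len=16, data_dim=32)
activity = (torch.rand(4, 6) < 0.3).float()
y_data = torch.randn(4, 32)                   # stand-in for received codewords
loss = nn.BCEWithLogitsLoss()(model(activity, y_data), activity)
loss.backward()
```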
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
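Since the checkpoints are public, a quick usage sketch with the Hugging Face `transformers` library looks like this; `bigscience/bloom-560m` is one of the smaller released variants, chosen here so the demo fits in ordinary memory:

```python
# Hedged usage sketch: loads a small released BLOOM variant from the Hugging
# Face Hub and generates a continuation. Assumes `transformers` and `torch`
# are installed and the checkpoint can be downloaded.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("BLOOM is a 176B-parameter open-access", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```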
IceCube, a cubic-kilometer array of optical sensors deployed between 1.45 km and 2.45 km below the surface of the Antarctic ice sheet, is designed to detect atmospheric and astrophysical neutrinos between 1 GeV and 1 PeV. The classification and reconstruction of events from the in-ice detectors play a central role in IceCube data analysis. Reconstructing and classifying events is challenging due to the detector geometry, the inhomogeneous scattering and absorption of light in the ice, and, below 100 GeV, the relatively low number of signal photons produced per event. To address this challenge, IceCube events can be represented as point cloud graphs, with a graph neural network (GNN) serving as the classification and reconstruction method. The GNN is capable of distinguishing neutrino events from cosmic-ray backgrounds, classifying different neutrino event types, and reconstructing the deposited energy, direction, and interaction vertex. Based on simulation, we provide a comparison in the 1-100 GeV energy range to the state-of-the-art maximum-likelihood techniques used in current IceCube analyses, including the effects of known systematic uncertainties. For neutrino event classification, the GNN improves the signal efficiency by 18% at a fixed false positive rate (FPR), compared to the current IceCube method. Alternatively, the GNN reduces the FPR by more than a factor of 8 (to below half a percent) at a fixed signal efficiency. For the reconstruction of energy, direction, and interaction vertex, the resolution improves on average by 13%-20% compared to current maximum-likelihood techniques. When run on a GPU, the GNN is able to process IceCube events at a rate close to the median IceCube trigger rate of 2.7 kHz, which opens the possibility of using low-energy neutrinos in online searches for transient events.
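A minimal sketch of the point-cloud-graph formulation follows, assuming hypothetical node features and a generic message-passing layer rather than the collaboration's actual model:

```python
# Minimal sketch (hypothetical architecture): each sensor hit is a node with
# (x, y, z, time, charge) features; edges connect k-nearest neighbors, and a
# simple message-passing stack pools to event-level class logits.
import torch
import torch.nn as nn

def knn_edges(pos, k=8):
    # Pairwise distances -> indices of the k nearest neighbors per node.
    d = torch.cdist(pos, pos)
    return d.topk(k + 1, largest=False).indices[:, 1:]   # drop self

class MessagePass(nn.Module):
    def __init__(self, dim_in, dim_out):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim_in, dim_out), nn.ReLU())
    def forward(self, x, nbrs):
        msgs = x[nbrs]                                    # (N, k, dim_in)
        agg = msgs.max(dim=1).values                      # max-aggregate neighbors
        return self.mlp(torch.cat([x, agg], dim=-1))

class EventGNN(nn.Module):
    def __init__(self, feat_dim=5, hidden=64, num_classes=3):
        super().__init__()
        self.mp1 = MessagePass(feat_dim, hidden)
        self.mp2 = MessagePass(hidden, hidden)
        self.head = nn.Linear(hidden, num_classes)
    def forward(self, feats, pos):
        nbrs = knn_edges(pos)
        h = self.mp2(self.mp1(feats, nbrs), nbrs)
        return self.head(h.mean(dim=0))                   # pool nodes -> logits

# Toy event: 40 sensor hits with (x, y, z, t, charge) features.
pos = torch.randn(40, 3)
feats = torch.cat([pos, torch.randn(40, 2)], dim=-1)
print(EventGNN()(feats, pos))                             # event class logits
```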
In this paper, we propose a novel end-to-end user-defined keyword spotting method that utilizes linguistically corresponding patterns between speech and text sequences. Unlike previous approaches requiring speech keyword enrollment, our method compares input queries with an enrolled text keyword sequence. To place the audio and text representations within a common latent space, we adopt an attention-based cross-modal matching approach that is trained in an end-to-end manner with a monotonic matching loss and a keyword classification loss. We also utilize a de-noising loss for the acoustic embedding network to improve robustness in noisy environments. Additionally, we introduce the LibriPhrase dataset, a new short-phrase dataset based on LibriSpeech, for efficiently training keyword spotting models. Our proposed method achieves competitive results on various evaluation sets compared to other single-modal and cross-modal baselines.
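A minimal sketch of the attention-based cross-modal matching step, with hypothetical layer sizes; the paper's monotonic matching loss and de-noising loss are omitted here, and only a binary match classification loss is shown:

```python
# Minimal sketch (hypothetical sizes, not the paper's exact network): text
# token embeddings attend over audio frames, and a classifier decides whether
# the query utterance contains the enrolled text keyword.
import torch
import torch.nn as nn

class CrossModalKWS(nn.Module):
    def __init__(self, n_mels=40, vocab=30, dim=128):
        super().__init__()
        self.audio_enc = nn.GRU(n_mels, dim, batch_first=True)
        self.text_emb = nn.Embedding(vocab, dim)
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(dim, 1)    # keyword present / absent
    def forward(self, mels, token_ids):
        aud, _ = self.audio_enc(mels)                  # (B, T, dim)
        txt = self.text_emb(token_ids)                 # (B, L, dim)
        # Each text token attends over audio frames (cross-modal matching).
        matched, attn_w = self.attn(txt, aud, aud)
        score = self.classifier(matched.mean(dim=1))   # pooled match logit
        return score.squeeze(-1), attn_w

model = CrossModalKWS()
mels = torch.randn(2, 100, 40)                         # 2 queries, 100 frames
tokens = torch.randint(0, 30, (2, 7))                  # enrolled text keyword
logit, attn = model(mels, tokens)
loss = nn.BCEWithLogitsLoss()(logit, torch.tensor([1.0, 0.0]))
loss.backward()
```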
Half of long-term care (LTC) residents are malnourished, increasing hospitalization, mortality, and morbidity, with lower quality of life. Current tracking methods are subjective and time-consuming. This paper presents the automated food imaging and nutrient intake tracking (AFINI-T) technology designed for LTC. We propose a novel convolutional autoencoder for food classification, trained and tested on our simulated LTC food intake dataset (up to 15 classes per meal scenario; top-1 classification accuracy: 88.9%; mean intake error: -0.4 mL ± 36.7 mL). Nutrient intake estimation by volume was strongly correlated with nutrient estimates from mass (r² = 0.92 to 0.99), with good agreement between methods (σ = -2.7 to -0.01; zero lies within each limit of agreement). The AFINI-T approach is a deep-learning-powered computational nutrient sensing system that may provide a novel means of more accurately and objectively tracking LTC resident food intake to support strategies for preventing malnutrition.
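A minimal sketch of a convolutional autoencoder with a classification head on the latent code, assuming hypothetical image sizes and channel counts (not the AFINI-T implementation):

```python
# Minimal sketch (hypothetical sizes): the reconstruction objective
# regularizes the latent features used for food-type classification.
import torch
import torch.nn as nn

class FoodCAE(nn.Module):
    def __init__(self, num_classes=15):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(16, 3, 2, stride=2), nn.Sigmoid(),  # 32 -> 64
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)
    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.classifier(z.flatten(1))

model = FoodCAE()
imgs = torch.rand(4, 3, 64, 64)
labels = torch.randint(0, 15, (4,))
recon, logits = model(imgs)
# Joint objective: reconstruct the plate image and classify the food type.
loss = nn.MSELoss()(recon, imgs) + nn.CrossEntropyLoss()(logits, labels)
loss.backward()
```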
Unsupervised Domain Adaptation (UDA) makes predictions for the target domain data while manual annotations are only available in the source domain. Previous methods minimize the domain discrepancy neglecting the class information, which may lead to misalignment and poor generalization performance. To address this issue, this paper proposes Contrastive Adaptation Network (CAN) optimizing a new metric which explicitly models the intra-class domain discrepancy and the inter-class domain discrepancy. We design an alternating update strategy for training CAN in an end-to-end manner. Experiments on two real-world benchmarks Office-31 and VisDA-2017 demonstrate that CAN performs favorably against the state-of-the-art methods and produces more discriminative features.
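A minimal sketch of the contrastive domain discrepancy idea, using a linear-kernel simplification over class centroids; CAN itself uses kernel MMD and clustering-based target label estimation:

```python
# Hedged sketch (my simplification, not CAN's exact objective): minimize the
# intra-class discrepancy across domains while maximizing the inter-class
# discrepancy, measured here between per-class feature centroids.
import torch

def class_means(feats, labels, num_classes):
    return torch.stack([feats[labels == c].mean(dim=0) for c in range(num_classes)])

def contrastive_domain_discrepancy(src_f, src_y, tgt_f, tgt_y, num_classes):
    mu_s = class_means(src_f, src_y, num_classes)     # (C, d) source centroids
    mu_t = class_means(tgt_f, tgt_y, num_classes)     # (C, d) target centroids
    intra = ((mu_s - mu_t) ** 2).sum(dim=1).mean()    # same class, across domains
    d = torch.cdist(mu_s, mu_t) ** 2                  # all cross-domain class pairs
    inter = (d.sum() - d.diag().sum()) / (num_classes * (num_classes - 1))
    return intra - inter                              # minimize intra, maximize inter

# Toy usage: 2D features, 3 classes; in CAN the target labels would come
# from clustering, not ground truth.
src_f, tgt_f = torch.randn(60, 2), torch.randn(60, 2)
src_y = torch.arange(60) % 3
tgt_y = torch.arange(60) % 3
print(contrastive_domain_discrepancy(src_f, src_y, tgt_f, tgt_y, 3))
```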
Many recent works on understanding deep learning try to quantify how much individual data instances influence the optimization and generalization of a model, either by analyzing the behavior of the model during training or by measuring the performance gap of the model when the instance is removed from the dataset. Such approaches reveal characteristics and importance of individual instances, which may provide useful information in diagnosing and improving deep learning. However, most of the existing works on data valuation require actual training of a model, which often demands high-computational cost. In this paper, we provide a training-free data valuation score, called complexity-gap score, which is a data-centric score to quantify the influence of individual instances in generalization of two-layer overparameterized neural networks. The proposed score can quantify irregularity of the instances and measure how much each data instance contributes in the total movement of the network parameters during training. We theoretically analyze and empirically demonstrate the effectiveness of the complexity-gap score in finding 'irregular or mislabeled' data instances, and also provide applications of the score in analyzing datasets and diagnosing training dynamics.
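A hedged sketch of a complexity-gap-style score follows, paraphrasing the idea via the closed-form two-layer ReLU NTK gram matrix; the paper's exact normalization may differ:

```python
# Hedged sketch (my paraphrase, not the paper's exact formula): measure how a
# data-dependent complexity sqrt(y^T (H^inf)^{-1} y), built from a two-layer
# ReLU NTK gram matrix, changes when one instance is removed -- a
# training-free, data-centric valuation of that instance.
import numpy as np

def relu_ntk_gram(X):
    """Closed-form infinite-width gram matrix for a two-layer ReLU net."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    cos = np.clip(Xn @ Xn.T, -1.0, 1.0)
    theta = np.arccos(cos)
    return cos * (np.pi - theta) / (2 * np.pi)

def complexity(X, y):
    H = relu_ntk_gram(X) + 1e-6 * np.eye(len(y))      # ridge for stability
    return float(np.sqrt(y @ np.linalg.solve(H, y)))

def complexity_gap_scores(X, y):
    base = complexity(X, y)
    keep = lambda i: np.arange(len(y)) != i
    return np.array([base - complexity(X[keep(i)], y[keep(i)])
                     for i in range(len(y))])

# Toy usage: the score should be larger for irregular/mislabeled instances.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = np.sign(X[:, 0])
y[0] = -y[0]                                          # flip one label
scores = complexity_gap_scores(X, y)
print("score of mislabeled point:", scores[0], "mean score:", scores.mean())
```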
While the capabilities of autonomous systems have been steadily improving in recent years, these systems still struggle to rapidly explore previously unknown environments without the aid of GPS-assisted navigation. The DARPA Subterranean (SubT) Challenge aimed to fast-track the development of autonomous exploration systems by evaluating their performance in real-world underground search-and-rescue scenarios. Subterranean environments present a plethora of challenges for robotic systems, such as limited communications, complex topology, visually-degraded sensing, and harsh terrain. The presented solution enables long-term autonomy with minimal human supervision by combining a powerful and independent single-agent autonomy stack with higher-level mission management operating over a flexible mesh network. The autonomy suite deployed on quadruped and wheeled robots was fully independent, freeing the human operators to loosely supervise the mission and make high-impact strategic decisions. We also discuss lessons learned from fielding our system at the SubT Final Event, relating to vehicle versatility, system adaptability, and re-configurable communications.
An unbiased scene graph generation (SGG) algorithm referred to as Skew Class-Balanced Re-weighting (SCR) is proposed to address the biased predicate predictions caused by the long-tailed distribution. Prior works focus mainly on alleviating the deteriorating performance of minority predicate predictions, at the cost of drastic drops in recall for the majority predicates. The trade-off between majority and minority predicate performance on the limited SGG datasets has not yet been properly analyzed. In this paper, to alleviate this issue, the Skew Class-Balanced Re-weighting (SCR) loss function is proposed for unbiased SGG models. Leveraging the skewness of the biased predicate predictions, SCR estimates the target predicate weight coefficients and re-weights the biased predicates more heavily, achieving a better trade-off between the majority and minority predicates. Extensive experiments conducted on the standard Visual Genome dataset and Open Images V4 & V6 demonstrate the performance and generality of SCR with traditional SGG models.
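A hedged sketch of skew-based re-weighting, using a simplified skew estimator (mean predicted mass over empirical class frequency) rather than the paper's exact coefficient:

```python
# Hedged sketch (my simplification, not SCR's exact estimator): classes whose
# predictions are skewed toward over-prediction get down-weighted, and
# under-predicted (minority) classes get up-weighted in the cross-entropy.
import torch
import torch.nn.functional as F

def skew_weights(logits, labels, num_classes, alpha=1.0):
    probs = logits.softmax(dim=-1)
    pred_mass = probs.mean(dim=0)            # mean predicted mass per class
    freq = torch.bincount(labels, minlength=num_classes).float() / len(labels)
    skew = pred_mass / freq.clamp_min(1e-6)  # >1 means over-predicted class
    return (1.0 / skew.clamp_min(1e-6)) ** alpha

def scr_style_loss(logits, labels, num_classes):
    w = skew_weights(logits.detach(), labels, num_classes)
    return F.cross_entropy(logits, labels, weight=w)

# Toy usage with a long-tailed batch of predicate labels.
logits = torch.randn(32, 5, requires_grad=True)
labels = torch.multinomial(torch.tensor([.6, .2, .1, .07, .03]), 32,
                           replacement=True)
scr_style_loss(logits, labels, 5).backward()
```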
Attention mechanisms form a core component of several successful deep learning architectures, and are based on one key idea: ''The output depends only on a small (but unknown) segment of the input.'' In several practical applications like image captioning and language translation, this is mostly true. In trained models with an attention mechanism, the outputs of an intermediate module that encodes the segment of the input responsible for the output are often used as a way to peek into the 'reasoning' of the network. We make such a notion more precise for a variant of the classification problem that we term selective dependence classification (SDC) when used with attention model architectures. Under such a setting, we demonstrate various error modes where an attention model can be accurate but fail to be interpretable, and show that such models do occur as a result of training. We illustrate various situations that can accentuate and mitigate this behaviour. Finally, we use our objective definition of interpretability for SDC tasks to evaluate a few attention model learning algorithms designed to encourage sparsity, and demonstrate that these algorithms help improve interpretability.
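A hedged sketch of how such an interpretability check could be set up, with a toy attention classifier and synthetic "responsible token" labels (all names hypothetical):

```python
# Hedged sketch (hypothetical setup): for inputs whose label depends only on
# one known segment, test whether the model's peak attention actually lands
# on that segment -- accuracy and interpretability are measured separately.
import torch
import torch.nn as nn

class TinyAttnClassifier(nn.Module):
    def __init__(self, dim=16, num_classes=2):
        super().__init__()
        self.query = nn.Parameter(torch.randn(dim))
        self.head = nn.Linear(dim, num_classes)
    def forward(self, x):                        # x: (B, T, dim)
        attn = (x @ self.query).softmax(dim=-1)  # (B, T) attention weights
        ctx = (attn.unsqueeze(-1) * x).sum(1)    # attention-pooled context
        return self.head(ctx), attn

model = TinyAttnClassifier()
x = torch.randn(8, 10, 16)
true_segment = torch.randint(0, 10, (8,))        # token that determines the label
logits, attn = model(x)
# Interpretability metric: how often peak attention hits the responsible token.
hit_rate = (attn.argmax(dim=-1) == true_segment).float().mean()
print(f"attention-alignment rate: {hit_rate:.2f}")
```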